41 research outputs found

    Combining artificial neural networks and evolution to solve multiobjective knapsack problems

    The multiobjective knapsack problem (MOKP) is a combinatorial problem that arises in various applications, including resource allocation, computer science and finance. Evolutionary multiobjective optimization algorithms (EMOAs) can be effective in solving MOKPs. However, they often face difficulties due to the loss of solution diversity and poor scalability. To address those issues, our study [2] proposes to generate candidate solutions with artificial neural networks, which is intended to add intelligence to the search. As gradient-based learning cannot be used when target values are unknown, neuroevolution is adapted to adjust the neural network parameters. The proposal is implemented within a state-of-the-art EMOA and benchmarked against traditional search operators based on binary crossover. The obtained experimental results indicate a superior performance of the proposed approach. Furthermore, it is advantageous in terms of scalability and can be readily incorporated into different EMOAs.
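
    A minimal sketch of the neuroevolution loop described above, assuming a fixed-topology network whose flat weight vector serves as the genotype and a hypothetical scalar fitness function that scores the knapsack solution decoded from it; parameters are adjusted by mutation and selection rather than by gradients, since no target outputs are available.

        import random

        def mutate(weights, sigma=0.1):
            # Gaussian perturbation of the flat weight vector (the genotype).
            return [w + random.gauss(0.0, sigma) for w in weights]

        def neuroevolution(fitness, n_weights, pop_size=20, generations=100):
            # fitness maps a weight vector to the quality of the decoded solution.
            population = [[random.uniform(-1.0, 1.0) for _ in range(n_weights)]
                          for _ in range(pop_size)]
            for _ in range(generations):
                ranked = sorted(population, key=fitness, reverse=True)
                parents = ranked[:pop_size // 2]          # truncation selection
                children = [mutate(p) for p in parents]   # no gradients needed
                population = parents + children
            return max(population, key=fitness)

        # Toy usage with a placeholder fitness (drive the weights toward zero).
        best = neuroevolution(lambda w: -sum(x * x for x in w), n_weights=4)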

    Neuroevolution for solving multiobjective knapsack problems

    The multiobjective knapsack problem (MOKP) is an important combinatorial problem that arises in various applications, including resource allocation, computer science and finance. When tackling this problem with evolutionary multiobjective optimization algorithms (EMOAs), it has been demonstrated that traditional recombination operators acting on binary solution representations are susceptible to a loss of diversity and poor scalability. To address those issues, we propose to use artificial neural networks for generating solutions by performing a binary classification of items using the information about their profits and weights. As gradient-based learning cannot be used when target values are unknown, neuroevolution is adapted to adjust the neural network parameters. The main contribution of this study resides in developing a solution encoding and genotype-phenotype mapping for EMOAs to solve MOKPs. The proposal is implemented within a state-of-the-art EMOA and benchmarked against traditional variation operators based on binary crossovers. The obtained experimental results indicate a superior performance of the proposed approach. Furthermore, it is advantageous in terms of scalability and can be readily incorporated into different EMOAs. Work supported by the Portuguese Fundação para a Ciência e a Tecnologia under grant PEst-C/CTM/LA0025/2013 (Strategic Project LA 25, 2013-2014).
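
    A simplified sketch of the genotype-phenotype mapping described above; the single-neuron "network" and the feature layout are illustrative assumptions, not the paper's exact encoding. The evolved weight vector parameterizes a classifier that decides, item by item, whether to include the item based on its profits and weight, and a simple capacity check repairs infeasible choices.

        import math

        def decode(genotype, items, capacity):
            # genotype: evolved weights for 3 inputs (profit1, profit2, weight) + bias.
            # items: list of (profit1, profit2, weight) tuples; returns a 0/1 vector.
            w1, w2, w3, bias = genotype
            selection, load = [], 0.0
            for p1, p2, w in items:
                score = w1 * p1 + w2 * p2 + w3 * w + bias         # single-neuron classifier
                include = 1.0 / (1.0 + math.exp(-score)) > 0.5    # sigmoid -> binary class
                if include and load + w <= capacity:              # repair: respect capacity
                    selection.append(1)
                    load += w
                else:
                    selection.append(0)
            return selection

        # Hypothetical example: three items, two profit criteria, capacity 10.
        print(decode([0.8, 0.6, -1.0, 0.0],
                     [(10, 7, 4.0), (3, 9, 6.0), (8, 2, 5.0)], capacity=10.0))  # [1, 1, 0]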

    Comparing parallel algorithms for van der Waals energy with cell-list technique for protein structure prediction

    The discovery of the structure of a protein is a difficult and expensive task, because it requires minimizing different energies related to it. The van der Waals energy has the most expensive evaluation in this context, and computational methods have been developed to address it, such as the Genetic Algorithm (GA) and the cell-list technique, which reduces the complexity of the evaluation from O(n²) to O(n). Even with the support of GA and cell lists, the van der Waals energy evaluation still requires a long computing time, even for a small protein. Parallel computing can reduce the runtime needed to predict the structure of proteins. Parallel algorithms in this context are usually specific to one programming model and computer architecture, resulting in limited speedups. This paper compares the runtime of three distinct parallel algorithms for the evaluation of an ab initio, full-atom approach based on GA and the cell-list technique, in order to minimize the van der Waals energy. The three parallel algorithms are written in C and use one of these programming models: MPI, OpenMP or hybrid (MPI+OpenMP). Our results show that the van der Waals energy is evaluated faster and with better speedups when hybrid and more flexible parallel algorithms are used to predict the structure of larger proteins. We also show that for small proteins the communication in MPI imposes a high overhead on the parallel execution and, thus, OpenMP presents a better cost-benefit ratio in such cases.
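
    An illustrative, serial Python sketch of the cell-list idea (not the paper's C/MPI/OpenMP code): atoms are binned into cubic cells whose side equals the cutoff radius, so each atom's Lennard-Jones (van der Waals) contribution is evaluated only against atoms in its own and the neighboring cells instead of against all other atoms. The epsilon and sigma values below are placeholders.

        from collections import defaultdict
        from itertools import product

        def vdw_energy(positions, cutoff=8.0, epsilon=0.2, sigma=3.4):
            # Bin atoms into cubic cells whose side is the cutoff radius.
            cells = defaultdict(list)
            for i, (x, y, z) in enumerate(positions):
                cells[(int(x // cutoff), int(y // cutoff), int(z // cutoff))].append(i)

            energy = 0.0
            for (cx, cy, cz), members in cells.items():
                # Only atoms in the same or one of the 26 adjacent cells can lie
                # within the cutoff, which avoids the O(n^2) all-pairs loop.
                neighbors = [j for dx, dy, dz in product((-1, 0, 1), repeat=3)
                             for j in cells.get((cx + dx, cy + dy, cz + dz), [])]
                for i in members:
                    for j in neighbors:
                        if j <= i:
                            continue  # count each pair once
                        r = sum((a - b) ** 2
                                for a, b in zip(positions[i], positions[j])) ** 0.5
                        if 0.0 < r < cutoff:
                            energy += 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
            return energy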

    Identification of risk areas using spatial clustering to improve dengue monitoring in urban environments

    Monitoring the occurrence and spread of epidemics is essential for improving decision-making and developing better public policies in urban environments. Besides temporal aspects, it is also essential to evaluate risk areas. However, only a few works in the literature apply spatial analysis to dengue epidemics in Brazil, mainly due to a lack of data availability. Additionally, few available methodologies allow for identifying risk areas while considering spatial aspects. The main objective of this work was to identify spatial clusters of risk for dengue cases according to the social vulnerability of each area, which constitutes a powerful tool for effective epidemiological and urban management. This work carries out an ecological study that considered dengue cases in São Carlos-SP, Brazil, in the years 2018, 2019 and 2020. The spatial scan technique was applied to classify the risk areas, considering the relative risk (RR) with a 95% confidence interval (95% CI) and the São Paulo Social Vulnerability Index (IPVS) to characterize these areas. Three clusters were identified in 2018, with a high relative risk (RR = 28.86); twenty clusters were identified in 2019, with a high relative risk (RR = 36.26); and five clusters were identified in 2020, with a high relative risk (RR = 23.32). The highest risk was located in a region with high vulnerability, and the second highest in a region with very low vulnerability. These results provide information that allows the targeting of specific control actions based on the early detection of cases in places with greater dengue transmissibility. DOI: 10.36558/rsc.v12i3.792
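
    An illustrative calculation of the relative risk reported for a spatial cluster, using the standard scan-statistic ratio of the incidence inside a candidate cluster to the incidence in the rest of the study area; the numbers below are hypothetical, not the study's data.

        def relative_risk(cases_in, pop_in, cases_total, pop_total):
            # Incidence inside the candidate cluster versus incidence everywhere else.
            inside = cases_in / pop_in
            outside = (cases_total - cases_in) / (pop_total - pop_in)
            return inside / outside

        # Hypothetical example: 120 dengue cases among 5,000 residents of a cluster,
        # out of 400 cases in a city of 250,000 inhabitants, gives RR = 21.0.
        print(round(relative_risk(120, 5_000, 400, 250_000), 1))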

    Manipulation of human verticality using high-definition transcranial direct current stimulation

    Background: Using conventional tDCS over the temporo-parietal junction (TPJ), we previously reported that it is possible to manipulate the subjective visual vertical (SVV) and postural control. We also demonstrated that high-definition tDCS (HD-tDCS) can achieve substantially greater cortical stimulation focality than conventional tDCS. However, it is critical to establish dose-response effects using well-defined protocols with relevance to clinically meaningful applications. Objective: To conduct three pilot studies investigating polarity- and intensity-dependent effects of HD-tDCS over the right TPJ on behavioral and physiological outcome measures in healthy subjects. We additionally aimed to establish the feasibility, safety, and tolerability of this stimulation protocol. Methods: We designed three separate randomized, double-blind, crossover phase I clinical trials in different cohorts of healthy adults using the same stimulation protocol. The primary outcome measure for trial 1 was SVV; for trial 2, weight-bearing asymmetry (WBA); and for trial 3, electroencephalography power spectral density (EEG-PSD). The HD-tDCS montage comprised a single central electrode and 3 surround electrodes (HD-tDCS 3x1) over the right TPJ. For each study, we tested 3 x 2 min of HD-tDCS 3x1 at 1, 2 and 3 mA, with anode center, cathode center, or sham stimulation, in random order across days. Results: We found significant SVV deviation relative to baseline, specific to the cathode-center condition, with consistent direction and increasing with stimulation intensity. We further showed significant WBA with direction governed by stimulation polarity (cathode center, left asymmetry; anode center, right asymmetry). EEG-PSD in the gamma band was significantly increased at 3 mA under the cathode. Conclusions: The present series of studies provides converging evidence for focal neuromodulation that can modify physiology and have behavioral consequences with clinical potential.

    Theoretical Tinnitus framework: A Neurofunctional Model

    Subjective tinnitus is the conscious (attended) perception of sound in the absence of an external source and can be classified as an auditory phantom perception. Current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional tinnitus model indicating that the conscious perception of the phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as in developing the annoyance found in clinical tinnitus distress. Structurally, the Neurofunctional tinnitus model includes the peripheral auditory system, the thalamus, the limbic system, the brain stem, the basal ganglia, the striatum, and the auditory and prefrontal cortices. Functionally, the model assumes the presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise-canceling mechanisms in the midbrain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the sourceless sound is eventually perceived and can be cognitively interpreted as suspicious or as an indication of a disease, in which case cortical top-down processes weaken the noise-canceling effects. This results in an increase in negative cognitive and emotional reactions such as depression and anxiety. The negative or positive cognitive-emotional feedback within the top-down approach may have no relation to the patient's previous experience; it can also be associated with aversive stimuli, similar to the abnormal neural activity generating the phantom sound. Cognitive and emotional reactions depend on general personality biases toward evaluative conditioning combined with a negative cognitive-emotional appraisal of stimuli, as in the case of people with hypochondria. We acknowledge that the proposed Neurofunctional tinnitus model does not cover all tinnitus variations and patients. To support our model, we present evidence from several studies using neuroimaging, electrophysiology, brain lesion, and behavioral techniques.

    Efficient Forest Data Structure for Evolutionary Algorithms Applied to Network Design

    The design of a network is a solution to several engineering and science problems. Several network design problems are known to be NP-hard, and population-based metaheuristics like evolutionary algorithms (EAs) have been largely investigated for such problems. Such optimization methods simultaneously generate a large number of potential solutions to investigate the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees, or more generally, spanning forests. To efficiently explore the search space, special data structures have been developed to provide operations that manipulate a set of spanning trees (population). For a tree with n nodes, the most efficient data structures available in the literature require time O(n) to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called node-depth-degree representation (NDDR), and we demonstrate that using this encoding, generating a new spanning forest requires average time O(√n). Experiments with an EA based on NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem have shown that the implementation adds small constants and lower order terms to the theoretical bound. Work supported by FAPESP, CAPES and CNPq.
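
    A much simplified node-depth style sketch (not the NDDR itself, which also stores node degrees and achieves the O(√n) average-case bound): each tree is kept as a preorder list of (node, depth) pairs, so the subtree rooted at any node is a contiguous slice that can be located and detached when a spanning tree is modified.

        def subtree_slice(tree, root):
            # tree: preorder list of (node, depth) pairs for one spanning tree.
            start = next(i for i, (node, _) in enumerate(tree) if node == root)
            depth = tree[start][1]
            end = start + 1
            while end < len(tree) and tree[end][1] > depth:
                end += 1              # the subtree is the contiguous slice below root
            return start, end

        # Example: a path 0-1-2 plus a leaf 3 attached to node 1.
        tree = [(0, 0), (1, 1), (2, 2), (3, 2)]
        print(tree[slice(*subtree_slice(tree, 1))])   # [(1, 1), (2, 2), (3, 2)]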

    Efficient forest data structure for evolutionary algorithms applied to network design

    The design of a network is a solution to several engineering and science problems. Several network design problems are known to be NP-hard, and population-based metaheuristics like evolutionary algorithms (EAs) have been largely investigated for such problems. Such optimization methods simultaneously generate a large number of potential solutions to investigate the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees, or more generally, spanning forests. To efficiently explore the search space, special data structures have been developed to provide operations that manipulate a set of spanning trees (population). For a tree with n nodes, the most efficient data structures available in the literature require time O(n) to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called node-depth-degree representation (NDDR), and we demonstrate that using this encoding, generating a new spanning forest requires average time O(√n). Experiments with an EA based on NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem have shown that the implementation adds small constants and lower order terms to the theoretical bound. Work supported by CNPq and FAPESP.

    A hybrid case adaptation approach for case-based reasoning

    Case-Based Reasoning is a methodology for problem solving based on past experiences. It tries to solve a new problem by retrieving and adapting previously known solutions of similar problems. However, retrieved solutions generally require adaptations in order to be applied to new contexts. One of the major challenges in Case-Based Reasoning is the development of an efficient methodology for case adaptation. The most widely used form of adaptation employs hand-coded adaptation rules, which demands a significant knowledge acquisition and engineering effort. An alternative for overcoming the difficulties associated with acquiring knowledge for case adaptation has been the use of hybrid approaches and automatic learning algorithms to acquire the adaptation knowledge. We investigate hybrid approaches for case adaptation that employ Machine Learning algorithms. The investigated approaches automatically learn adaptation knowledge from a case base and apply it to adapt retrieved solutions. In order to verify the potential of the proposed approaches, they are experimentally compared with individual Machine Learning techniques. The results obtained indicate that these approaches can acquire case adaptation knowledge efficiently. They show that the combination of the Instance-Based Learning and Inductive Learning paradigms, together with the use of a data set of adaptation patterns, yields adaptations of the retrieved solutions with high predictive accuracy.
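
    A minimal sketch of the retrieve-then-adapt idea described above; the feature vectors, distance measure and nearest-pattern adaptation rule are illustrative assumptions, not the paper's exact hybrid. The most similar past case is retrieved, and an adaptation pattern learned from pairs of cases (a problem difference mapped to a solution correction) takes the place of hand-coded adaptation rules.

        def nearest(case_base, query):
            # Instance-based retrieval: smallest squared distance in problem space.
            return min(case_base,
                       key=lambda c: sum((a - b) ** 2 for a, b in zip(c["problem"], query)))

        def adapt(retrieved, query, patterns):
            # Pick the learned pattern whose problem difference best matches this one
            # and apply its solution correction to the retrieved solution.
            diff = tuple(q - p for q, p in zip(query, retrieved["problem"]))
            pattern = min(patterns,
                          key=lambda pat: sum((d - pd) ** 2 for d, pd in zip(diff, pat["diff"])))
            return retrieved["solution"] + pattern["correction"]

        # Hypothetical toy data: problems are 2-feature vectors, solutions are numbers.
        case_base = [{"problem": (1.0, 2.0), "solution": 10.0},
                     {"problem": (4.0, 1.0), "solution": 20.0}]
        patterns = [{"diff": (1.0, 0.0), "correction": 3.0},
                    {"diff": (0.0, 1.0), "correction": 1.0}]
        case = nearest(case_base, (2.0, 2.0))
        print(adapt(case, (2.0, 2.0), patterns))   # 13.0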